
    A Survey of Multi-Agent Human-Robot Interaction Systems

    This article presents a survey of the Human-Robot Interaction (HRI) literature, focusing on systems containing more than two agents (i.e., multiple humans and/or multiple robots). We identify three core aspects of "multi-agent" HRI systems that are useful for understanding how these systems differ from dyadic systems and from one another: the Team structure, the Interaction style among agents, and the system's Computational characteristics. Under these core aspects, we present five attributes of HRI systems, namely Team size, Team composition, Interaction model, Communication modalities, and Robot control. These attributes are used to characterize and distinguish one system from another. We populate the resulting categories with examples from recent literature, briefly discuss their applications, and analyze how these attributes differ from the case of dyadic human-robot systems. We summarize key observations from the current literature and identify challenges and promising areas for future research in this domain. To realize the vision of robots being part of society and interacting seamlessly with humans, research on multi-human, multi-robot systems needs to expand. Not only do these systems require coordination among several agents, but they also involve multi-agent and indirect interactions that are absent from dyadic HRI systems. Adding multiple agents to HRI systems requires advanced interaction schemes, behavior-understanding methods, and control methods that allow natural interactions among humans and robots. In addition, research on human behavioral understanding in mixed human-robot teams requires more attention; it will help formulate and implement effective robot control policies in HRI systems with large numbers of heterogeneous robots and humans, a team composition reflecting many real-world scenarios.
    Comment: 23 pages, 7 figures
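    The survey's taxonomy lends itself to a simple record type. As a minimal sketch (all names, categories, and the example values below are illustrative, not taken from the paper), the five attributes might be encoded like this:

```python
from dataclasses import dataclass
from enum import Enum, auto

class InteractionModel(Enum):
    # Illustrative values; the survey's actual categories may differ.
    DIRECT = auto()
    INDIRECT = auto()
    MIXED = auto()

@dataclass
class HRISystem:
    """Characterizes a multi-agent HRI system by five attributes."""
    team_size: int                       # total number of agents
    team_composition: tuple[int, int]    # (num_humans, num_robots)
    interaction_model: InteractionModel
    communication_modalities: list[str]  # e.g., ["speech", "gesture"]
    robot_control: str                   # e.g., "centralized" or "decentralized"

# Hypothetical example: two humans supervising three robots.
warehouse_team = HRISystem(
    team_size=5,
    team_composition=(2, 3),
    interaction_model=InteractionModel.MIXED,
    communication_modalities=["speech", "gesture"],
    robot_control="centralized",
)
```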

    Can a robot catch you lying? A machine learning system to detect lies during interactions

    Deception is a complex social skill present in human interactions. Many professionals, such as teachers, therapists, and law enforcement officers, rely on deception detection techniques to support their work. Robots with the ability to autonomously detect deception could provide an important aid to human-human and human-robot interactions. The objective of this work is to demonstrate that it is possible to develop a lie detection system that could be implemented on robots. To this end, we focus on human-human and human-robot interaction to understand whether participants behave differently when lying to a robot than when lying to a human. Participants were shown short movies of robberies and then interrogated by a human and by a humanoid robot acting as "detectives". Following the instructions, subjects provided veridical responses to half of the questions and false replies to the other half. Behavioral variables such as eye movements, response time, and eloquence were measured during the task, while personality traits were assessed before the experiment began. Participants' behavior showed strong similarities during the interaction with the human and with the humanoid. The behavioral features were then used to train and test a lie detection algorithm. The results show that the selected behavioral variables are valid markers of deception in both human-human and human-robot interactions and could be exploited to effectively enable robots to detect lies.
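    As an illustration of how such a behavioral-feature classifier might be built, here is a minimal sketch on synthetic data; the feature values, labels, and model choice are placeholders and do not reflect the study's actual data or method:

```python
# Hypothetical sketch: train/test a lie detector on behavioral features.
# The data below are fabricated placeholders, not the study's measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Columns: eye-movement count, response time (s), eloquence (words/answer).
X = rng.normal(loc=[12.0, 2.5, 18.0], scale=[3.0, 0.8, 5.0], size=(200, 3))
y = rng.integers(0, 2, size=200)  # 1 = lie, 0 = truthful (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```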

    Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

    Increasing attention is being given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflecting on others' trustworthiness: appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust in robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on trust in human-robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation bias in robotics and autonomous systems. Key points include the need for multidisciplinary understandings situated in an ecosystem perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in matters of education and literacy. The article integrates diverse literature and provides common ground for understanding overtrust in the context of HRI.

    Heat Stress and Reproduction
